Efficient Algorithms Using The Multiplicative Weights Update Method

Author

  • Satyen Kale
Abstract

Algorithms based on convex optimization, especially linear and semidefinite programming, are ubiquitous in Computer Science. While there are polynomial-time algorithms known to solve such problems, quite often the running time of these algorithms is very high. Designing simpler and more efficient algorithms is important for practical impact. In this thesis, we explore applications of the Multiplicative Weights method in the design of efficient algorithms for various optimization problems. This method, which was repeatedly discovered in quite diverse fields, is an algorithmic technique which maintains a distribution on a certain set of interest, and updates it iteratively by multiplying the probability mass of elements by suitably chosen factors based on feedback obtained by running another algorithm on the distribution. We present a single meta-algorithm which unifies all known applications of this method in a common framework.

Next, we generalize the method to the setting of symmetric matrices rather than real numbers. We derive the following applications of the resulting Matrix Multiplicative Weights algorithm:

1. The first truly general, combinatorial, primal-dual method for designing efficient algorithms for semidefinite programming. Using these techniques, we obtain significantly faster algorithms for obtaining O(√(log n)) approximations to various graph partitioning problems, such as Sparsest Cut and Balanced Separator in both directed and undirected weighted graphs, and constraint satisfaction problems such as Min UnCut and Min 2CNF Deletion.

2. An Õ(n^3) time derandomization of the Alon-Roichman construction of expanders using Cayley graphs. The algorithm yields a set of O(log n) elements which generates an expanding Cayley graph in any group of n elements.

3. An Õ(n^3) time deterministic O(log n) approximation algorithm for the quantum hypergraph covering problem.

4. An alternative proof of a result of Aaronson that the γ-fat-shattering dimension of quantum states on n qubits is O(n/γ^2).

Using our framework for the classical Multiplicative Weights Update method, we derive the following algorithmic applications:

1. Fast algorithms for approximately solving several families of semidefinite programs which beat interior point methods. Our algorithms rely on eigenvector computations, which are very efficient in practice compared to the Cholesky decompositions needed by interior point methods. We also give a matrix sparsification algorithm to speed up the eigenvector computation using the Lanczos iteration.

2. An O(√(log n)) approximation to the Sparsest Cut and the Balanced Separator problems in undirected weighted graphs in Õ(n^2) time by embedding expander flows in the graph. This improves upon the previous Õ(n^4.5) time algorithm of Arora, Rao, and Vazirani, which was based on semidefinite programming.
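To make the meta-algorithm described in the abstract concrete, the following is a minimal sketch of the classical Multiplicative Weights update in the standard experts setting, assuming costs in [-1, 1] and the (1 - η·cost) update rule; the function name and the oracle interface are illustrative, not taken from the thesis.

```python
import numpy as np

def multiplicative_weights(cost_oracle, n, T, eta=0.1):
    """Minimal sketch of the Multiplicative Weights meta-algorithm.

    cost_oracle(p) should return a cost vector in [-1, 1]^n obtained by
    running some other algorithm on the current distribution p
    (an illustrative interface, not the thesis' notation).
    """
    w = np.ones(n)                      # one weight per element/expert
    total_cost = np.zeros(n)
    for _ in range(T):
        p = w / w.sum()                 # current distribution over elements
        cost = np.asarray(cost_oracle(p))
        w = w * (1.0 - eta * cost)      # multiplicative update of the weights
        total_cost += cost
    return w / w.sum(), total_cost

# Usage example: random costs; the distribution concentrates on the
# elements with the lowest cumulative cost.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p, total = multiplicative_weights(lambda p: rng.uniform(-1, 1, 8), n=8, T=500)
    print(p.round(3), total.round(1))
```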


Similar resources

The Multiplicative Weights Update Method: a Meta-Algorithm and Applications

Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analyses are usually very similar and rely on an exponential potential function. We present a simple meta-algorithm that unifies these disparate algorithms and derives them as simple instantiations of the meta-algorithm.
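For context on what the exponential potential argument yields, the following is the standard guarantee for Multiplicative Weights in the experts setting, stated in the commonly used form with costs in [0, 1] and learning rate η ≤ 1/2; this is general background, not a quotation from the survey.

```latex
% Potential and update (costs m_{t,i} \in [0,1], learning rate \eta):
\Phi_t = \sum_{i=1}^{n} w_{t,i}, \qquad
w_{t+1,i} = w_{t,i}\,\bigl(1 - \eta\, m_{t,i}\bigr).
% Bounding \Phi_{T+1} from above and below gives, for every expert i,
\sum_{t=1}^{T} \langle p_t, m_t \rangle
  \;\le\; (1+\eta)\sum_{t=1}^{T} m_{t,i} \;+\; \frac{\ln n}{\eta}.
```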


Path Kernels and Multiplicative Updates

Kernels are typically applied to linear algorithms whose weight vector is a linear combination of the feature vectors of the examples. On-line versions of these algorithms are sometimes called “additive update” algorithms because they add a multiple of the last feature vector to the current weight vector. In this paper we have found a way to use special convolution kernels to efficiently implement “multi...
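To illustrate the additive-versus-multiplicative distinction this abstract draws, here is a generic sketch contrasting the two online update styles for a linear predictor: a perceptron-style additive rule versus a normalized, exponentiated multiplicative rule. This is background only, assuming labels in {-1, +1}, and is not code from the paper.

```python
import numpy as np

def additive_update(w, x, y, eta=1.0):
    """Perceptron-style additive update: add a multiple of the feature vector."""
    if y * np.dot(w, x) <= 0:           # mistake-driven correction
        w = w + eta * y * x
    return w

def multiplicative_update(w, x, y, eta=0.1):
    """Multiplicative (exponentiated-gradient-style) update: scale each weight."""
    w = w * np.exp(eta * y * x)         # multiply weights by per-feature factors
    return w / w.sum()                  # renormalize to keep a distribution

# Usage: one step of each style on the same example.
w_add = np.zeros(4)
w_mul = np.full(4, 0.25)
x, y = np.array([1.0, 0.0, -1.0, 0.5]), 1
print(additive_update(w_add, x, y), multiplicative_update(w_mul, x, y))
```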


The Multiplicative Weights Update Method: A Meta Algorithm and its Applications

Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analyses are usually very similar and rely on an exponential potential function. In this survey we present a simple meta-algorithm that unifies these disparate algorithms and derives them as simple instantiations of the meta-a...


Simultaneous optimal tuning of the structure and parameters of a neural network using a hybrid algorithm based on gravitational search, for classification and function approximation applications

Determining the optimum number of nodes, number of hidden layers, and synaptic connection weights in an artificial neural network (ANN) plays an important role in the performance of this soft computing model. Several methods have been proposed for weight update (training) and structure selection of ANNs. For example, error back-propagation (EBP) is a traditional method for weights...


Performance of Multiplicative-weights-updates

Let us recall the mechanics of Multiplicative-Weights-Updates: at every time t, the learner maintains a weight vector w_t ≥ 0 over the experts. Given the weight vector, the probability distribution over the experts is computed as p_t = w_t / (w_t · 1). The weights are initialized at w_1 = (1/n) · 1. (Multiplicative-weights-update step.) Given the loss vector at time t, the weights are updated as follows: w...
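The update rule itself is cut off in the excerpt above; for reference, the two most common forms of the multiplicative-weights-update step in this setting are shown below. This is standard background under the stated experts setting, not a reconstruction of the truncated text, which may use either form.

```latex
% Given the loss vector \ell_t at time t, a common multiplicative update is
w_{t+1,i} = w_{t,i}\,\bigl(1 - \eta\,\ell_{t,i}\bigr),
% or, in the exponential (Hedge) form,
w_{t+1,i} = w_{t,i}\,\exp\!\bigl(-\eta\,\ell_{t,i}\bigr),
\qquad p_{t+1} = \frac{w_{t+1}}{w_{t+1}\cdot \mathbf{1}}.
```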


Publication date: 2006